# Unsupervised Pre-training
## Vit Base Patch16 224.dino
- **Org:** timm · **License:** Apache-2.0
- **Tags:** Image Classification, Transformers
- **Stats:** 33.45k downloads · 5 likes

A Vision Transformer (ViT) image-feature model trained with the self-supervised DINO method, suitable for image classification and feature-extraction tasks.
## Gpt2 Distil Chinese Cluecorpussmall
- **Org:** uer
- **Tags:** Large Language Model, Chinese
- **Stats:** 1,043 downloads · 20 likes

A lightweight Chinese GPT-2 model pre-trained on CLUECorpusSmall, with 6 layers and 768 hidden units, suitable for Chinese text-generation tasks.
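A sketch of the distilled architecture described above (6 layers, 768 hidden units). The vocabulary size of 21128 (the common Chinese BERT vocabulary) and the hub id `uer/gpt2-distil-chinese-cluecorpussmall` are assumptions; the config is built locally so nothing is downloaded.

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Randomly initialised model with the card's stated dimensions; with network
# access the trained weights could instead be loaded via
#   GPT2LMHeadModel.from_pretrained("uer/gpt2-distil-chinese-cluecorpussmall")
config = GPT2Config(n_layer=6, n_embd=768, n_head=12, vocab_size=21128)
model = GPT2LMHeadModel(config)
print(model.config.n_layer, model.config.n_embd)  # 6 768
```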
## T5 V1 1 Large
- **Org:** google · **License:** Apache-2.0
- **Tags:** Large Language Model, English
- **Stats:** 111.29k downloads · 17 likes

T5 1.1 is Google's improved text-to-text transfer model. It uses the GEGLU activation function and an optimized architecture, and its pre-training is restricted to unsupervised objectives.
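A sketch of what distinguishes T5 1.1 in the transformers config: the feed-forward block uses gated GELU (GEGLU) instead of ReLU, and input embeddings are not tied to the LM head. The hub id `google/t5-v1_1-large` is an assumption; the config is built locally so nothing is downloaded.

```python
from transformers import T5Config

# The "gated-gelu" projection is what the GEGLU claim above corresponds to;
# with network access the full model could be loaded via
#   T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-large")
config = T5Config(feed_forward_proj="gated-gelu", tie_word_embeddings=False)
print(config.feed_forward_proj, config.is_gated_act)
```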
## Flaubert Base Cased
- **Org:** flaubert · **License:** MIT
- **Tags:** Large Language Model, Transformers, French
- **Stats:** 4,253 downloads · 8 likes

FlauBERT is a French BERT model pre-trained on a large-scale French corpus, developed by the French National Center for Scientific Research (CNRS).
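An encoding sketch with base-size dimensions (12 layers, 768-dim embeddings, 12 heads are assumptions for the "base" variant). A randomly initialised model is built so nothing is downloaded; the hub id in the comment is likewise an assumption.

```python
import torch
from transformers import FlaubertConfig, FlaubertModel

# With network access, the trained encoder could be loaded via
#   FlaubertModel.from_pretrained("flaubert/flaubert_base_cased")
config = FlaubertConfig(n_layers=12, emb_dim=768, n_heads=12)
model = FlaubertModel(config).eval()

# Encode a dummy token-id sequence -> one 768-dim vector per token.
input_ids = torch.tensor([[0, 5, 7, 1]])
with torch.no_grad():
    out = model(input_ids=input_ids)
print(out.last_hidden_state.shape)  # torch.Size([1, 4, 768])
```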